We previously defined the Divide step as recursively splitting the array $A$ until the base case is met. Let's now visualize this process, which determines the depth of the Merge Sort recursion tree and accounts for the $O(\log n)$ component of the overall complexity.
- The Division step is computationally inexpensive: computing the midpoint (mid) takes $O(1)$ time per split. (Note that the Python slices `A[:mid]` and `A[mid:]` copy their elements, costing $O(n)$ per level; this does not change the overall $O(n \log n)$ bound, since the Combine step already does $O(n)$ work per level.)
- The recursion is defined by two key points: the recursive call on the split halves (the Conquer stage) and the base case.
- The base case is an array of size $n \le 1$. When the array is reduced to a single element, it is inherently sorted and immediately returned, ready to be passed up the recursive stack for the subsequent Combine (Merge) step.
- Because we halve the array size $n$ at every step, the total number of splitting levels necessary before reaching the base case is $\lceil \log_2 n \rceil$. This structure is foundational to Merge Sort's superior time complexity, $O(n \log n)$.
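The $\lceil \log_2 n \rceil$ depth can be checked directly. The sketch below uses a hypothetical helper, `split_depth`, introduced only for illustration; it mirrors the Divide step by repeatedly halving $n$ until the base case size is reached:

```python
import math

def split_depth(n):
    """Count halving levels until subarrays reach size <= 1.

    Hypothetical helper for illustration; mirrors merge_sort's
    Divide step, where the larger half determines the depth.
    """
    depth = 0
    while n > 1:
        n = (n + 1) // 2  # size of the larger half after a split
        depth += 1
    return depth

# Agrees with ceil(log2(n)) for any n >= 2, e.g.:
for n in [2, 7, 8, 1000]:
    assert split_depth(n) == math.ceil(math.log2(n))
```

For instance, $n = 1000$ shrinks as $1000 \to 500 \to 250 \to \dots \to 1$ in exactly $10 = \lceil \log_2 1000 \rceil$ levels.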
Python Implementation: merge_sort (Division)
```python
def merge_sort(A):
    n = len(A)
    if n <= 1:
        return A  # Base Case: stop division

    # 1. Divide: O(1) step to find the midpoint
    mid = n // 2
    Left = A[:mid]
    Right = A[mid:]

    # 2. Conquer: recursive calls drive the depth
    Left_Sorted = merge_sort(Left)
    Right_Sorted = merge_sort(Right)

    # ... Combine step omitted for this slide
    # return merge(Left_Sorted, Right_Sorted)
```
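Although the Combine step is omitted above, the `merge` helper it calls is the standard two-pointer merge of two already-sorted halves, which runs in $O(n)$ per level. A minimal sketch (one possible implementation, not the slide's own code):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list in O(len(left) + len(right))."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stable sort)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One half is exhausted; append the remainder of the other
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

With this helper in place, uncommenting the final `return merge(Left_Sorted, Right_Sorted)` line completes the $O(n \log n)$ algorithm.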